Conversation

@zimeg zimeg commented Dec 18, 2025

Summary

This PR updates the chat_stream helper to support and flush "chunks".

Testing

The following snippet might be useful in testing:

import os
import time

from slack_sdk import WebClient
from slack_sdk.models.messages.chunk import MarkdownTextChunk, TaskUpdateChunk

# A bot token with the scopes needed for chat_stream is assumed here.
client = WebClient(token=os.environ["SLACK_BOT_TOKEN"])

streamer = client.chat_stream(
    channel="C0123456789",
    recipient_team_id="T0123456789",
    recipient_user_id="U0123456789",
    thread_ts="10101010101.010101",
)

streamer.append(
    chunks=[
        MarkdownTextChunk(
            text="Hello.\nI have received the task. ",
        ),
        MarkdownTextChunk(
            text="This task appears manageable.\nThat is good.",
        ),
        TaskUpdateChunk(
            id="001",
            title="Understanding the task...",
            status="in_progress",
            details="- Identifying the goal\n- Identifying constraints",
        ),
        TaskUpdateChunk(
            id="002",
            title="Performing acrobatics...",
            status="pending",
        ),
    ],
)
time.sleep(4)

streamer.append(
    chunks=[
        TaskUpdateChunk(
            id="001",
            title="Understanding the task...",
            status="complete",
            details="\n- Pretending this was obvious",
            output="We'll continue to ramble now",
        ),
        TaskUpdateChunk(
            id="002",
            title="Performing acrobatics...",
            status="in_progress",
        ),
    ],
)
time.sleep(4)

streamer.stop(
    chunks=[
        TaskUpdateChunk(
            id="002",
            title="Performing acrobatics...",
            status="complete",
            details="- Jumped atop ropes\n- Juggled bowling pins\n- Rode a single wheel too",
        ),
        MarkdownTextChunk(
            text="The crowd appears to be astouned and applauds :popcorn:"
        ),
    ],
)
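
The channel, team, user, and thread timestamp values above are placeholders; swap in real IDs from a test workspace before running the snippet.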

Category

  • slack_sdk.web.WebClient (sync/async) (Web API client)
  • /docs (Documents)
  • tests/integration_tests (Automated tests for this library)

Requirements

  • I've read and understood the Contributing Guidelines and have done my best effort to follow them.
  • I've read and agree to the Code of Conduct.
  • I've run python3 -m venv .venv && source .venv/bin/activate && ./scripts/run_validation.sh after making the changes.

codecov bot commented Dec 18, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
⚠️ Please upload report for BASE (feat-ai-apps-thinking-steps@1c84f7f).
✅ All tests successful. No failed tests found.

Additional details and impacted files
@@                      Coverage Diff                       @@
##             feat-ai-apps-thinking-steps    #1809   +/-   ##
==============================================================
  Coverage                               ?   83.90%           
==============================================================
  Files                                  ?      116           
  Lines                                  ?    13192           
  Branches                               ?        0           
==============================================================
  Hits                                   ?    11069           
  Misses                                 ?     2123           
  Partials                               ?        0           

☔ View full report in Codecov by Sentry.

@zimeg zimeg marked this pull request as ready for review January 14, 2026 20:05
@zimeg zimeg requested a review from a team as a code owner January 14, 2026 20:05
Contributor

@srtaalej srtaalej left a comment

Thanks for adding this ⭐ ⭐ ⭐ left some thoughts on the empty append - would love to hear your thoughts

channel: str,
ts: str,
- markdown_text: str,
+ markdown_text: Optional[str] = None,
Contributor

🌟 It is much easier to have both markdown and chunks optional!

recipient_team_id="T0123456789",
recipient_user_id="U0123456789",
thread_ts="123.000",
)
Contributor

I tested this with an empty call to .append() and I think we should raise an error or warning when append is called empty 🤔
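
For reference, the empty call in question is just the following (a minimal sketch; streamer comes from the chat_stream setup shown above):

streamer.append()  # neither markdown_text nor chunks is provided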

Member

A warning is a nice idea. I wouldn't consider it a blocker for the PR, if we want to move forward and add the warning later.

Member Author

@srtaalej @mwbrooks I super appreciate the suggestions! The API would return an error in this case, but the SDK helper won't... We should make that more obvious in a follow-up 👁️‍🗨️ ✨

Member

@mwbrooks mwbrooks left a comment

✅ Nice, thanks for updating the stream helper! 🙇🏻

🧪 Manual testing works! Below is the modified listeners/assistant/message.py that I used based on the PR test steps.

bolt-python-assistant-template/listeners/assistant/message.py:
from logging import Logger
from typing import Dict, List

from slack_bolt import BoltContext, Say, SetStatus
from slack_sdk import WebClient

from ai.llm_caller import call_llm

from ..views.feedback_block import create_feedback_block

import time
from slack_sdk.models.messages.chunk import MarkdownTextChunk, TaskUpdateChunk, URLSource


def message(
    client: WebClient,
    context: BoltContext,
    logger: Logger,
    payload: dict,
    say: Say,
    set_status: SetStatus,
):
    """
    Handles messages that users send (or prompts they select) in an assistant thread and generates an AI response.

    Args:
        client: Slack WebClient for making API calls
        context: Bolt context containing channel and thread information
        logger: Logger instance for error tracking
        payload: Event payload with message details (channel, user, text, etc.)
        say: Function to send messages to the thread
        set_status: Function to update the assistant's status
    """
    try:
        channel_id = payload["channel"]
        team_id = context.team_id
        thread_ts = payload["thread_ts"]
        user_id = context.user_id

        set_status(
            status="thinking...",
            loading_messages=[
                "Teaching the hamsters to type faster…",
                "Untangling the internet cables…",
                "Consulting the office goldfish…",
                "Polishing up the response just for you…",
                "Convincing the AI to stop overthinking…",
            ],
        )

        replies = client.conversations_replies(
            channel=context.channel_id,
            ts=context.thread_ts,
            oldest=context.thread_ts,
            limit=10,
        )
        messages_in_thread: List[Dict[str, str]] = []
        for message in replies["messages"]:
            role = "user" if message.get("bot_id") is None else "assistant"
            messages_in_thread.append({"role": role, "content": message["text"]})
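
        # Note: for this manual test, the canned chunks streamed below stand in
        # for an LLM response, so call_llm and messages_in_thread go unused here.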

        streamer = client.chat_stream(
            channel=channel_id,
            recipient_team_id=team_id,
            recipient_user_id=user_id,
            thread_ts=thread_ts,
        )

        streamer.append(
            chunks=[
                MarkdownTextChunk(
                    text="Hello.\nI have received the task. ",
                ),
                MarkdownTextChunk(
                    text="This task appears manageable.\nThat is good.",
                ),
                TaskUpdateChunk(
                    id="001",
                    title="Understanding the task...",
                    status="in_progress",
                    details="- Identifying the goal\n- Identifying constraints",
                ),
                TaskUpdateChunk(
                    id="002",
                    title="Performing acrobatics...",
                    status="pending",
                ),
            ],
        )

        time.sleep(4)

        streamer.append(
            chunks=[
                TaskUpdateChunk(
                    id="001",
                    title="Understanding the task...",
                    status="complete",
                    details="\n- Pretending this was obvious",
                    output="We'll continue to ramble now",
                ),
                TaskUpdateChunk(
                    id="002",
                    title="Performing acrobatics...",
                    status="in_progress",
                ),
            ],
        )

        time.sleep(4)

        streamer.stop(
            chunks=[
                TaskUpdateChunk(
                    id="12",
                    title="solved equation!",
                    status="complete",
                    sources=[
                        URLSource(
                            url="https://oeis.org",
                            text="The On-Line Encyclopedia of Integer Sequences (OEIS)",
                        ),
                    ],
                ),
                MarkdownTextChunk(text="that computes."),
            ],
        )

    except Exception as e:
        logger.exception(f"Failed to handle a user message event: {e}")
        say(f":warning: Something went wrong! ({e})")

raise e.SlackRequestError("Failed to stop stream: stream not started")
self._stream_ts = str(response["ts"])
self._state = "in_progress"
flushings: List[Union[Dict, Chunk]] = []
Member

😉 quite the visual name! 💩 Would chunks_to_flush also work? Perhaps too long.

Member Author

@mwbrooks I do appreciate this suggestion. Let's blame @claude for this one. But in a follow-up that adds warnings to append too, I agree we should change this!

Contributor

@WilliamBergamin WilliamBergamin left a comment

These changes look good to me! I left a few comments to make sure I understand the behavior properly, but they should be non-blocking 💯

Comment on lines +176 to +180
flushings: List[Union[Dict, Chunk]] = []
if len(self._buffer) != 0:
    flushings.append(MarkdownTextChunk(text=self._buffer))
if chunks is not None:
    flushings.extend(chunks)
Contributor

Nice! This allows the markdown_text and chunks to be accepted and passed to the method using the chunks param 💯

Member Author

@WilliamBergamin The chunks argument might be preferred IIRC but I think markdown_text is still so useful for streaming LLM responses directly 👾
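
For example, token-by-token streaming could look like this (a sketch; llm_stream is a hypothetical iterator of text deltas):

for token in llm_stream:
    streamer.append(markdown_text=token)  # buffered, flushed once buffer_size is reached
streamer.stop()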

Comment on lines +69 to +70
markdown_text: Optional[str] = None,
chunks: Optional[Sequence[Union[Dict, Chunk]]] = None,
Contributor

Since both arguments are now optional, what happens if a developer does not pass either of these arguments when using the method?

Member Author

@WilliamBergamin A markdown_text_or_chunks_required error should appear from the API, but I think @srtaalej and @mwbrooks mentioned that a warning might be useful to include with the SDK helper...

📝 Noting that we should follow up with this change!
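
A rough sketch of what that follow-up guard could look like in the helper's append (hypothetical and not part of this PR; the names mirror the snippets quoted in this thread):

import warnings

def append(self, *, markdown_text=None, chunks=None, **kwargs):
    if markdown_text is None and chunks is None:
        # The API rejects this case with markdown_text_or_chunks_required,
        # so warn client-side before any request is attempted.
        warnings.warn("append() called without markdown_text or chunks")
        return None
    ...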

Comment on lines +107 to +110
if markdown_text is not None:
    self._buffer += markdown_text
if len(self._buffer) >= self._buffer_size or chunks is not None:
    return self._flush_buffer(chunks=chunks, **kwargs)
Contributor

If I'm understanding this correctly, if I do

streamer = client.chat_stream(
    channel="C0123456789",
    recipient_team_id="T0123456789",
    recipient_user_id="U0123456789",
    thread_ts="10101010101.010101",
)

streamer.append(markdown_text="\n")

streamer.append(
    chunks=[
        MarkdownTextChunk(
            text="Hello.\nI have received the task. ",
        ),
    ],
)
  • The first append with markdown will not be sent to Slack but rather added to the buffer
  • The second append with the chunks will be sent to Slack along with whatever may be in the buffer, regardless of whether the buffer has exceeded the buffer_size?

Member Author

@WilliamBergamin Correct! Providing chunks is almost like a forced flush of the buffer 🪬
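
A compact illustration of that forced flush (a sketch; assumes the buffered text stays under the helper's buffer_size threshold):

streamer.append(markdown_text="partial ")  # buffered locally, no API call yet
streamer.append(
    chunks=[MarkdownTextChunk(text="more")],
)  # flushes the buffered "partial " as a MarkdownTextChunk along with the chunk in one request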

Base automatically changed from zimeg-feat-ai-apps-chunks to feat-ai-apps-thinking-steps January 16, 2026 23:04

zimeg commented Jan 16, 2026

@srtaalej @mwbrooks @WilliamBergamin Once more, the kind reviews are so appreciated! 🙏 ✨

I'm holding off on further changes to this PR so it can be tested soon, but I promise these will be included in upcoming commits.

@zimeg zimeg merged commit f4c0182 into feat-ai-apps-thinking-steps Jan 16, 2026
16 checks passed
@zimeg zimeg deleted the zimeg-feat-ai-apps-chunks-streamer branch January 16, 2026 23:25

Labels

enhancement · M-T: A feature request for new functionality · semver:minor · web-client


5 participants